Sign language encompasses movements of the arms and hands as a means of communication for people with hearing disabilities. An automated sign recognition system requires two main courses of action: the detection of particular features and the categorization of particular input data. Many approaches for detecting and classifying sign language have previously been put forward to improve system performance. However, recent progress in the computer vision field has motivated further exploration of hand sign/gesture recognition with the aid of deep neural networks. Arabic sign language has witnessed unprecedented research activity in recognizing hand signs and gestures using deep learning models. This paper proposes a vision-based system that applies a CNN to recognize Arabic hand sign-based letters and translate them into Arabic speech. The proposed system automatically detects hand sign letters and speaks out the result in Arabic using a deep learning model. The system achieves 90% accuracy in recognizing Arabic hand sign-based letters, which makes it a highly dependable system. Accuracy could be further improved by using more advanced hand-gesture recognition devices such as the Leap Motion or Xbox Kinect. After the Arabic hand sign-based letters are recognized, the outcome is fed to a text-to-speech engine, which produces Arabic audio as output.
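To make the recognition stage concrete, the sketch below implements a toy CNN forward pass in NumPy: one convolution, a ReLU, global average pooling, and a softmax over 28 classes (one per Arabic alphabet letter). This is a minimal illustration under assumed shapes and parameters, not the paper's actual architecture or trained weights.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive single-channel 2-D convolution with 'valid' padding."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

def classify(img, kernel, weights, bias):
    """Conv -> ReLU -> global average pool -> dense softmax.
    A toy stand-in for the paper's CNN; all shapes are illustrative."""
    feat = np.maximum(conv2d_valid(img, kernel), 0.0)  # ReLU activation
    pooled = feat.mean()                               # global average pooling
    logits = weights * pooled + bias                   # dense output layer
    return softmax(logits)                             # class probabilities

# Hypothetical inputs: a 28x28 grayscale hand-sign image, random parameters.
rng = np.random.default_rng(0)
img = rng.random((28, 28))
kernel = rng.standard_normal((3, 3))
num_classes = 28  # Arabic alphabet letters (assumed class count)
weights = rng.standard_normal(num_classes)
bias = np.zeros(num_classes)

probs = classify(img, kernel, weights, bias)
predicted_letter_index = int(np.argmax(probs))
```

In the full pipeline described above, `predicted_letter_index` would be mapped to its Arabic letter and handed to the text-to-speech engine; that hand-off is omitted here since the abstract does not name a specific TTS library.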